I've Been a Top 1% ChatGPT User Since 2022. Here's Why I'm Leaving and Following My Values.

By Quinn Karley, Ed.M. | Mountain Rise Partners

March 4, 2026

I remember the first time I used ChatGPT.

It was November 2022. I was curious, amazed, in disbelief, and within minutes I had a thinking partner: a collaborator, a tool that helped me organize and communicate my ideas more clearly.

I was hooked. And I went deep.

Over the next three years, I became a top 1% user. Not as a casual experimenter, but as someone who wanted to understand and learn as much as she could about how this new technology could help people do more with ease. I built Mountain Rise Partners around the belief that Generative AI, used thoughtfully, could give people back time, reduce burnout, and help create workplaces where humans actually thrive. Tech for Good!

So yes, leaving feels like a breakup after years of everyday interaction, and there's a sadness in it, a wish that things were different.

The Moment I Couldn't Look Away

Last week, something happened that I've been worried about for years.

Anthropic, the company behind Claude, drew an ethical line with the U.S. Pentagon. They said: our AI will not be used for mass domestic surveillance of American citizens, and it will not power fully autonomous weapons. The government said no to those limits.

I'm not here to litigate the legal fine print. I'm here to talk about what this moment means to me.

Tools Carry the Values of the Hands That Built Them

At Mountain Rise Partners, one of our core values is Non-Exploitative Technology, the belief that technology should support people, not extract from them. We advocate for responsible, ethical, and transparent use of AI and digital tools, especially in public and community-serving organizations.

That value isn't just something we put on our website. It's the lens through which we choose every tool we recommend, every system we help build, and every partnership we enter.

When you choose a tool or partner for your organization, you're not just choosing features and pricing. You're choosing the values baked into that product. You're choosing who gets to decide how that technology is used, and against whom.

Anthropic took a stand that cost them. Their CEO didn't issue a press release full of hedges; he wrote: "We cannot in good conscience accede to their request." And when the administration retaliated, Anthropic said they'd take legal action and didn't blink.

That's alignment between stated values and actual behavior under pressure.

That matters to me.

This Is the Conversation We Need to Be Having at Work

Here's what I know from the inside of organizations: most people adopting AI tools right now are not asking these questions. They're asking "what can it do for me?" and "how do I get started?", both valid and important questions that I love helping people answer.

But values questions are practical questions.

  • What happens to the data your employees enter into these systems?

  • Who has access to it, and under what conditions?

  • If the company behind your tool makes a deal with a government body that contradicts your own organizational values — are you aware? Does it matter to you?

These are not hypothetical concerns for tech ethicists. These are real questions for HR leaders, operations managers, team leads, and founders choosing tools today.

We also believe deeply in Safety & Well-Being, that healthy people build healthy systems, and that our work should prioritize psychological safety and sustainable expectations. That principle applies to the technology we use just as much as to the environments we create. Tools that normalize surveillance or autonomous harm do not build safe systems. They erode them.

What I'm Doing Now

I'm moving to Claude as my primary AI tool, not because it's perfect, but because Anthropic's actions matched their words when it actually cost something.

At Mountain Rise Partners, we believe in Kindness & Compassion, that good work begins with how people are treated. And we believe in Inclusion, that technology should create environments where people feel safe, not threatened, not disposable.

The name of our company is Mountain Rise Partners. "Rise" is intentional. We believe AI should elevate human beings, not surveil them, not replace them, not render them disposable. It should make work more human, not less.

Our tagline says it plainly: Technology in service of people, community, and thoughtful progress.

I think the leaders who are asking these questions now are going to be the ones their teams trust most later.

The Question Worth Sitting With

If the AI company behind your most-used tool made a deal that compromised your values — would you notice? Would it change anything for you?